41.
This paper makes two main contributions to 2D time-dependent vector field topology. First, we present a technique for robust, accurate, and efficient extraction of distinguished hyperbolic trajectories (DHT), the generative structures of 2D time-dependent vector field topology. It is based on the refinement of initial candidate curves. In contrast to previous approaches, it is robust because the refinement converges for reasonably close initial candidates, it is accurate due to its adaptive scheme, and it is efficient due to its high convergence speed. Second, we provide a detailed evaluation and discussion of previous approaches for the extraction of DHTs and for time-dependent vector field topology in general. We demonstrate the utility of our approach using analytical flows as well as data from computational fluid dynamics.
42.
Over the last few years, the global biosurfactant market has grown due to increasing consumer awareness of biological and bio-based products. Because of their composition, it can be speculated that biosurfactants are more biocompatible and more biodegradable than their chemical homologues. However, no studies on the biodegradability of biosurfactants currently exist in the literature. In this work, a biosurfactant contained in a crude extract, obtained from a corn wet-milling industry stream that ferments spontaneously in the presence of lactic acid bacteria, was subjected to a biodegradation study, without the addition of external microbial biomass, under different conditions of temperature (5–45 °C), biodegradation time (15–55 days), and pH (5–7). To this end, a Box–Behnken factorial design was applied, which allowed the percentage of biodegradation of the biosurfactant in the crude extract to be predicted within the range of the independent variables selected in the study, yielding biodegradation values between 3 and 80%. The percentage of biodegradation of the biosurfactant was calculated from the increase in the surface tension of samples of the crude extract. Furthermore, it was also possible to predict the variation in t1/2 for the biosurfactant (the time to reach 50% biodegradation) under different conditions.
43.
Computer-Supported Collaborative Learning (CSCL) is concerned with how Information and Communication Technology (ICT) might facilitate learning in groups that can be co-located or distributed over a network of computers such as the Internet. CSCL supports effective learning by means of communication of ideas and information among learners, collaborative access to essential documents, and feedback from instructors and peers on learning activities. As cloud technologies become increasingly popular and collaborative learning evolves, new directions for the development of collaborative learning tools deployed on the cloud are proposed. Development of such learning tools requires access to substantial data stored in the cloud. Ensuring efficient access to such data is hindered by the high latencies of the wide-area networks underlying cloud infrastructures. To improve learners' experience by accelerating data access, important files can be replicated so that a group of learners can access data from nearby locations. Since a cloud environment is highly dynamic, resource availability, network latency, and learner requests may change. In this paper, we present the advantages of collaborative learning and focus on the importance of data replication in the design of the dynamic cloud-based system that a collaborative learning portal uses. To this end, we introduce a highly distributed replication technique that determines optimal data locations to improve access performance by minimizing replication overhead (access and update). The problem is formulated using dynamic programming. Experimental results demonstrate the usefulness of the proposed collaborative learning system used by institutions in geographically distributed locations.
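The abstract does not spell out the paper's cost model, so the following is only a minimal illustrative sketch of how replica placement can be cast as dynamic programming under strong simplifying assumptions: learner sites lie on a one-dimensional latency line, each site is served by exactly one of k replicas, and every replica adds a fixed update-propagation cost. All names and numbers (`sites`, `update_cost`, the example latencies) are hypothetical.

```python
# Toy dynamic-programming sketch of replica placement (not the paper's model).
# Assumptions: learner sites lie on a 1-D latency line, each site is served by
# one replica, and every replica adds a fixed update (write-propagation) cost.

def serve_cost(sites, i, j):
    """Minimal total latency to serve sites[i..j] from one replica.
    Placing the replica at a median site minimizes the sum of distances."""
    m = sites[(i + j) // 2]
    return sum(abs(s - m) for s in sites[i:j + 1])

def place_replicas(sites, k, update_cost=5.0):
    """Return the minimal total cost of serving all sites with k replicas."""
    sites = sorted(sites)
    n = len(sites)
    INF = float("inf")
    # dp[r][j] = best cost of covering the first j sites with r replicas
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for r in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(r - 1, j):          # sites[i..j-1] form the r-th group
                cand = dp[r - 1][i] + serve_cost(sites, i, j - 1) + update_cost
                dp[r][j] = min(dp[r][j], cand)
    return dp[k][n]

if __name__ == "__main__":
    latencies = [1, 2, 3, 10, 11, 12, 30]      # hypothetical site positions
    for k in range(1, 4):
        print(k, "replicas -> cost", place_replicas(latencies, k))
```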
44.
A security management platform (SMP) is the technical support platform for running security management work on a routine basis; in practice it must process, in real time, the massive volume of log information generated by security devices. To address the low query efficiency over massive logs in existing SMPs, a cloud-computing-based SMP log storage and analysis system is designed. Using Hive's task-translation model, together with the distributed file system and the MapReduce parallel programming model of the Hadoop architecture, the system achieves effective storage and querying of massive SMP logs. Experimental results show that, compared with multi-table join queries over a relational database, the system improves the average query efficiency for SMP logs by about 90% and speeds up the overall response of centralized SMP management and control.
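The system's Hive schema and queries are not given in the abstract; the toy, single-process sketch below only illustrates the map/shuffle/reduce pattern that Hive compiles such queries into, here counting SMP log events per device and severity. The log format and field names are hypothetical.

```python
# Toy single-process illustration of the MapReduce pattern behind Hive-style
# queries over SMP logs (counting events per device/severity).
# The log format and field names here are hypothetical.
from collections import defaultdict

def map_phase(log_lines):
    """Map: parse each raw log line and emit ((device, severity), 1) pairs."""
    for line in log_lines:
        device, severity, _message = line.split("|", 2)
        yield (device.strip(), severity.strip()), 1

def shuffle(pairs):
    """Shuffle: group the intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for every (device, severity) key."""
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    logs = [
        "fw-01 | alert | port scan detected",
        "fw-01 | info  | config reloaded",
        "ids-02| alert | signature match",
        "fw-01 | alert | port scan detected",
    ]
    print(reduce_phase(shuffle(map_phase(logs))))
```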
45.
Value stream mapping (VSM) is a useful tool for describing the manufacturing state, especially for distinguishing between those activities that add value and those that do not. It can help eliminate non-value-adding activities and reduce work in process (WIP), thereby increasing the service level. This research follows the guidelines for designing the future-state VSM. These guidelines consist of five factors that can be changed simply, without any investment: (1) production unit; (2) pacemaker process; (3) number of batches; (4) production sequence; and (5) supermarket size. The five factors are applied to a fishing net manufacturing system and optimized using experimental design and a simulation optimization tool. The results show that the future-state maps can increase the service level by at least 29.41% and reduce WIP by at least 33.92%. In the present study, lean principles are innovatively adopted to solve a fishing net manufacturing problem, which is not well addressed in the literature. In light of the promising empirical results, the proposed methodologies are also readily applicable to similar industries.
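The paper's discrete-event simulation model is not reproduced in the abstract; the sketch below is only a schematic grid search over the five factors, with a placeholder evaluation function standing in for the simulator. All factor levels and the `evaluate()` formula are hypothetical.

```python
# Schematic sketch of searching over the five future-state VSM factors with a
# simulation in the loop. Factor levels and evaluate() are hypothetical
# placeholders for the paper's discrete-event simulation model.
from itertools import product

FACTORS = {
    "production_unit":   ["piece", "kg"],
    "pacemaker_process": ["weaving", "finishing"],
    "num_batches":       [1, 2, 4],
    "production_seq":    ["FIFO", "EDD"],
    "supermarket_size":  [50, 100, 200],
}

def evaluate(config):
    """Stand-in for a simulation run: return (service_level, wip).
    A real study would call the discrete-event simulator here."""
    service = 0.7 + 0.05 * config["num_batches"] - 0.0005 * config["supermarket_size"]
    wip = 300 + 0.5 * config["supermarket_size"] - 20 * config["num_batches"]
    return service, wip

def best_design():
    keys = list(FACTORS)
    best = None
    for levels in product(*(FACTORS[k] for k in keys)):
        cfg = dict(zip(keys, levels))
        service, wip = evaluate(cfg)
        score = service - 0.001 * wip          # simple weighted objective
        if best is None or score > best[0]:
            best = (score, cfg, service, wip)
    return best

if __name__ == "__main__":
    _, cfg, service, wip = best_design()
    print(cfg, "service=%.2f wip=%.0f" % (service, wip))
```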
46.
Edon80 is a stream cipher design that advanced to the third and final phase of the eSTREAM project. The core of the cipher consists of quasigroup string e-transformations, and it employs four quasigroups of order 4. The employed quasigroups influence the period of the keystream. There are 576 quasigroups of order 4 in total, and they have different period factors. The four quasigroups used in Edon80 were chosen through numerous computer experiments. In this paper, based on permutation groups, we give a theoretical criterion for determining the quasigroups and give a complete classification of the 576 quasigroups of order 4 according to their period factors.
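The four quasigroups actually selected for Edon80 and its keying schedule are not reproduced here; the sketch below only illustrates the quasigroup string e-transformation itself, using one arbitrarily chosen quasigroup of order 4 (a 4×4 Latin square).

```python
# Toy illustration of a quasigroup string e-transformation (the building block
# of Edon80). The order-4 quasigroup below is an arbitrary Latin square, not
# one of the four quasigroups actually chosen for the cipher.

# Multiplication table of a quasigroup of order 4: Q[x][y] = x * y.
# Every row and every column is a permutation of {0, 1, 2, 3}.
Q = [
    [0, 1, 2, 3],
    [1, 2, 3, 0],
    [2, 3, 0, 1],
    [3, 0, 1, 2],
]

def e_transform(leader, string):
    """e-transformation: b1 = leader * a1, b_i = b_{i-1} * a_i."""
    out = []
    prev = leader
    for a in string:
        prev = Q[prev][a]
        out.append(prev)
    return out

def is_quasigroup(table):
    """Check the Latin-square property (each row and column is a permutation)."""
    n = len(table)
    cols = list(zip(*table))
    return (all(sorted(row) == list(range(n)) for row in table)
            and all(sorted(col) == list(range(n)) for col in cols))

if __name__ == "__main__":
    assert is_quasigroup(Q)
    msg = [0, 1, 2, 3, 0, 1, 2, 3]
    print(e_transform(leader=2, string=msg))
```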
47.
Particle swarm optimization (PSO) is a bio-inspired optimization strategy founded on the movement of particles within swarms. PSO can be coded in a few lines in most programming languages, it uses only elementary mathematical operations, and it is inexpensive in terms of memory and running time. This paper discusses the application of PSO to rule discovery in fuzzy classifier systems (FCSs) instead of the classical genetic approach, and it proposes a new strategy, Knowledge Acquisition with Rules as Particles (KARP). In the KARP approach, every rule is encoded as a particle that moves in the search space in order to cooperate in obtaining high-quality rule bases, thereby improving the knowledge and performance of the FCS. The proposed swarm-based strategy is evaluated on a well-known problem of practical importance in which the integration of fuzzy systems is increasingly emerging due to the inherent uncertainty and dynamism of the environment: scheduling in grid distributed computational infrastructures. Simulation results are compared to those of classical genetic learning for fuzzy classifier systems, and the greater accuracy and convergence speed of classifier discovery systems using KARP are shown.
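KARP's encoding of fuzzy rules as particles is specific to the paper; the sketch below is just a generic global-best PSO minimizing a toy objective, to make the velocity and position update rules concrete.

```python
# Minimal global-best PSO minimizing a toy function; the rule-as-particle
# encoding used by KARP is not reproduced here.
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    # Initialize positions and velocities.
    x = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

if __name__ == "__main__":
    sphere = lambda p: sum(c * c for c in p)   # toy objective
    print(pso(sphere, dim=3))
```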
48.
This study presents a simulation optimization approach for a hybrid flow shop scheduling problem in a real-world semiconductor back-end assembly facility. The complexity of the problem is determined by demand and supply characteristics. Demand varies with orders characterized by different quantities, product types, and release times. Supply varies with the number of flexible manufacturing routes but is constrained in a multi-line/multi-stage production system that contains certain types and numbers of identical and unrelated parallel machines. An order is typically split into separate jobs for parallel processing and subsequently merged for completion to reduce flow time. Split jobs that use the same qualified machine type per order are compiled for quality and traceability. The objective is to achieve the feasible minimal flow time by determining the optimal assignment of the production line and machine type at each stage for each order. A simulation optimization approach is adopted due to the complex and stochastic nature of the problem. The approach includes a simulation model for performance evaluation, an optimization strategy based on a genetic algorithm, and an acceleration technique via optimal computing budget allocation. Furthermore, scenario analyses of different levels of demand, product mix, and lot sizing are performed to reveal the advantage of simulation. This study demonstrates the value of the simulation optimization approach for practical applications and provides directions for future research on the stochastic hybrid flow shop scheduling problem.
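Neither the simulation model nor the optimal computing budget allocation is detailed in the abstract; the sketch below is only a skeleton of a genetic algorithm over order-to-line assignments, with a stub in place of the simulator. Order quantities, line speeds, and the fitness model are hypothetical.

```python
# Skeleton of a genetic algorithm over order-to-line assignments, with a stub
# in place of the paper's discrete-event simulation and OCBA acceleration.
# Order sizes, line speeds, and the fitness model are hypothetical.
import random

ORDER_QTY  = [120, 80, 200, 60, 150]      # units per order (placeholder)
LINE_SPEED = [10.0, 8.0, 12.0]            # units per hour per line (placeholder)

def simulate(assignment):
    """Stub for the simulation model: estimated makespan if each line
    processes its assigned orders sequentially."""
    load = [0.0] * len(LINE_SPEED)
    for order, line in enumerate(assignment):
        load[line] += ORDER_QTY[order] / LINE_SPEED[line]
    return max(load)

def ga(pop_size=30, gens=50, pm=0.1):
    n_orders, n_lines = len(ORDER_QTY), len(LINE_SPEED)
    pop = [[random.randrange(n_lines) for _ in range(n_orders)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=simulate)                   # lower makespan is better
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_orders)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_orders):            # mutation: reassign a line
                if random.random() < pm:
                    child[i] = random.randrange(n_lines)
            children.append(child)
        pop = parents + children
    best = min(pop, key=simulate)
    return best, simulate(best)

if __name__ == "__main__":
    print(ga())
```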
49.
Mobile battery-operated devices are becoming essential instruments for business, communication, and social interaction. In addition to demanding an acceptable level of performance and a comprehensive set of features, users often desire extended battery lifetime. In fact, limited battery lifetime is one of the biggest obstacles facing the current utility and future growth of increasingly sophisticated “smart” mobile devices. This paper proposes a novel application-aware and user-interaction-aware energy optimization middleware framework (AURA) for pervasive mobile devices. AURA optimizes CPU and screen backlight energy consumption while maintaining a minimum acceptable level of performance. The proposed framework employs a novel Bayesian application classifier and management strategies based on Markov Decision Processes and Q-Learning to achieve energy savings. Real-world user evaluation studies on the Google Android based HTC Dream and Google Nexus One smartphones running the AURA framework demonstrate promising results, with up to 29% energy savings compared to the baseline device manager, and up to 5× savings over prior work on CPU and backlight energy co-optimization.
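AURA's actual state and action spaces and its reward function are not given in the abstract; the sketch below is a generic tabular Q-learning loop with a hypothetical environment that trades energy against a performance penalty.

```python
# Generic tabular Q-learning sketch for picking a CPU frequency level; AURA's
# actual states, actions, and reward are not reproduced (the environment here
# is a hypothetical stand-in).
import random

STATES  = ["idle", "interactive", "batch"]   # hypothetical app/usage classes
ACTIONS = [0.4, 0.8, 1.2]                    # hypothetical frequencies (GHz)

def reward(state, freq):
    """Stand-in reward: negative energy minus a penalty for being too slow."""
    energy = freq ** 2
    slowdown = {"idle": 0.0, "interactive": 3.0, "batch": 1.5}[state] / freq
    return -(energy + slowdown)

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = random.choice(STATES)
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        r = reward(state, action)
        next_state = random.choice(STATES)   # stand-in for observed usage
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
        state = next_state
    return {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}

if __name__ == "__main__":
    print(q_learning())   # learned frequency per usage class
```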
50.
In mobile-based traffic monitoring applications, each user provides real-time updates on their location and speed while driving. These data are collected by a centralized server and aggregated to provide participants with current traffic conditions. Successful participation in traffic monitoring applications based on participatory sensing depends on two factors: the information utility of the estimated traffic conditions, and the amount of private information (position and speed) each participant reveals to the server. We assume each user prefers to reveal as little private information as possible, but if everyone withholds information, the quality of traffic estimation will deteriorate. In this paper, we model these opposing requirements by giving each user a utility function that combines the benefit of high-quality traffic estimates and the cost of privacy loss. Using a novel Markovian model, we mathematically derive a policy that takes into account the mean, variance, and correlation of traffic on a given stretch of road and yields the optimal granularity of information revelation to maximize user utility. We validate the effectiveness of this policy using real-world empirical traces collected during the Mobile Century experiment in Northern California. The validation shows that the derived policy yields utilities that are very close to those obtained by an oracle scheme with full knowledge of the ground truth.
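The paper's Markovian derivation is not reproduced here; the toy sketch below only illustrates the trade-off described in the abstract, picking a reporting granularity that balances estimation benefit against privacy cost under a purely hypothetical benefit/cost model.

```python
# Toy illustration of choosing a reporting granularity that trades off traffic
# estimation benefit against privacy cost; the benefit/cost curves below are
# hypothetical, not the paper's Markovian model.
import math

GRANULARITIES_M = [50, 100, 250, 500, 1000, 2000]   # reporting cell sizes (meters)

def estimation_benefit(cell_m, traffic_std_kmh=15.0):
    """Coarser reports -> noisier speed estimates -> lower benefit."""
    error = traffic_std_kmh * math.sqrt(cell_m / 50.0)
    return 100.0 / (1.0 + error)

def privacy_cost(cell_m):
    """Finer reports reveal more about the driver's exact position."""
    return 50.0 / math.sqrt(cell_m)

def best_granularity(weight=1.0):
    def utility(cell_m):
        return estimation_benefit(cell_m) - weight * privacy_cost(cell_m)
    return max(GRANULARITIES_M, key=utility)

if __name__ == "__main__":
    for w in (0.5, 1.0, 2.0):
        print("privacy weight", w, "-> best cell size", best_granularity(w), "m")
```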